The Development of MSFC Usability Lab
This conference poster reviews the development of the usability lab at Marshall Space Flight Center (MSFC). The purpose of the lab was to integrate a fully functioning usability laboratory as a resource for future human factors assessments, and to implement preliminary usability testing on an MSFC website to validate the functionality of the lab.
Integration of MSFC Usability Lab with Usability Testing
As part of the Stage Analysis Branch, human factors engineering plays an important role in relating humans to the hardware and structural designs of the new launch vehicle. While many branches handle the technical aspects of creating a launch vehicle, human factors connects humans to these engineered systems, with the goal of improving operational performance and safety while reducing operational error and damage to hardware. Human factors engineers use physical and computerized models to identify possible areas for improvement, ensuring that components requiring maintenance are accessible and that the necessary maintenance activities can be accomplished with minimal risk to humans and hardware. Many testing methods serve this goal, such as physical mockups, computerized visualization, and usability testing. In this analysis, a usability test is conducted to evaluate how usable a website is to users who are and are not familiar with it. The testing is performed with participants and Morae software to record and analyze the results. This analysis serves as a preliminary test of the usability lab in preparation for use in new spacecraft programs, NASA Enterprise, or other NASA websites. The usability lab project is divided into two parts: integration of the usability lab and a preliminary test of the usability lab.
Optical ReLU-like Activation Function Based on a Semiconductor Laser with Optical Injection
Artificial neural networks usually consist of successive linear
multiply-accumulate operations and nonlinear activation functions. However,
most optical neural networks only achieve the linear operation in the optical
domain, while the optical implementation of activation function remains
challenging. Here we experimentally demonstrate an optical ReLU-like activation
function based on a semiconductor laser subject to optical injection. The
ReLU-like function is achieved in a broad regime above the Hopf bifurcation of
the injection-locking diagram. In particular, the slope of the activation
function is reconfigurable by tuning the frequency difference between the
master laser and the slave laser.
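The transfer function described in this abstract can be sketched numerically; the parameter names below are illustrative stand-ins, not quantities from the paper, with `threshold` playing the role of the Hopf bifurcation point and `slope` the gain set by the master-slave frequency detuning.

```python
import numpy as np

def relu_like(x, slope=1.0, threshold=0.0):
    """Numerical analogy of the optical ReLU-like response.

    `threshold` stands in for the Hopf bifurcation point and `slope`
    for the gain set by the frequency detuning between master and
    slave lasers; both parameter names are illustrative.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x > threshold, slope * (x - threshold), 0.0)

# Changing `slope` mimics reconfiguring the activation by tuning the
# master-slave frequency difference.
out = relu_like([-1.0, 0.5, 2.0], slope=2.0, threshold=0.0)
# out == [0.0, 1.0, 4.0]
```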
NewsDialogues: Towards Proactive News Grounded Conversation
Hot news is one of the most popular topics in daily conversations. However,
news grounded conversation has long been stymied by the lack of a well-designed
task definition and by scarce data. In this paper, we propose a novel task,
Proactive News Grounded Conversation, in which a dialogue system can
proactively lead the conversation based on some key topics of the news. In
addition, both information-seeking and chit-chat scenarios are realistically
included: the user may ask a series of questions about the news details, or
express opinions and simply want to chat. To further develop this
novel task, we collect a human-to-human Chinese dialogue dataset
NewsDialogues, which includes 1K conversations with a total of 14.6K
utterances and detailed annotations for target topics and knowledge spans.
Furthermore, we propose a method named Predict-Generate-Rank, consisting of a
generator for grounded knowledge prediction and response generation, and a
ranker that ranks multiple responses to alleviate exposure bias. We
conduct comprehensive experiments to demonstrate the effectiveness of the
proposed method and further present several key findings and challenges to
prompt future research.
Comment: Accepted to ACL 2023 Conference (Long Paper; Findings).
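The Predict-Generate-Rank pipeline named in this abstract can be sketched as follows. Everything here is an illustrative toy, not the paper's neural models: knowledge prediction is word overlap, generation is prefix sampling, and the ranker simply prefers longer candidates.

```python
import random

class ToyGenerator:
    """Stand-in for the paper's generator (grounded knowledge
    prediction + response generation); a word-overlap toy that only
    shows the pipeline's shape."""

    def predict_knowledge(self, news_sentences, history):
        # Stage 1: pick the news sentence most relevant to the last turn.
        last_turn = set(history[-1].lower().split())
        return max(news_sentences,
                   key=lambda s: len(last_turn & set(s.lower().split())))

    def generate(self, history, knowledge, seed):
        # Stage 2: emit a randomly truncated prefix of the knowledge span.
        rng = random.Random(seed)
        words = knowledge.split()
        return " ".join(words[: rng.randint(1, len(words))])

class ToyRanker:
    """Stand-in ranker; here it simply prefers longer responses."""
    def score(self, history, response):
        return len(response.split())

def predict_generate_rank(news_sentences, history, gen, ranker, k=4):
    knowledge = gen.predict_knowledge(news_sentences, history)      # predict
    candidates = [gen.generate(history, knowledge, seed=i)          # generate
                  for i in range(k)]
    return max(candidates, key=lambda r: ranker.score(history, r))  # rank

news = ["The rover landed on Mars today",
        "Ticket prices for the final rose sharply"]
history = ["Did you hear the rover landed safely?"]
reply = predict_generate_rank(news, history, ToyGenerator(), ToyRanker())
```

Ranking several sampled candidates after generation is what lets the final stage compensate for exposure bias in any single decoded response.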
MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning
Learning multimodal representations involves integrating information from
multiple heterogeneous sources of data. In order to accelerate progress towards
understudied modalities and tasks while ensuring real-world robustness, we
release MultiZoo, a public toolkit consisting of standardized implementations
of more than 20 core multimodal algorithms, and MultiBench, a large-scale
benchmark spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6
research areas.
Together, these provide an automated end-to-end machine learning pipeline that
simplifies and standardizes data loading, experimental setup, and model
evaluation. To enable holistic evaluation, we offer a comprehensive methodology
to assess (1) generalization, (2) time and space complexity, and (3) modality
robustness. MultiBench paves the way towards a better understanding of the
capabilities and limitations of multimodal models, while ensuring ease of use,
accessibility, and reproducibility. Our toolkits are publicly available, will
be regularly updated, and welcome inputs from the community.
Comment: JMLR Open Source Software 2023; code available at
https://github.com/pliang279/MultiBenc
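The load → fuse → evaluate shape of a standardized multimodal pipeline described in this abstract might look roughly like the sketch below. Every name here is hypothetical; none of it is the actual MultiZoo/MultiBench API.

```python
import numpy as np

def load_modalities(n=8, seed=0):
    """Stand-in for standardized data loading: two toy modalities.
    (Hypothetical helper, not part of MultiZoo/MultiBench.)"""
    rng = np.random.default_rng(seed)
    return {"vision": rng.normal(size=(n, 4)),
            "text": rng.normal(size=(n, 3))}

def concat_fusion(modalities):
    """Late concatenation, the simplest multimodal fusion baseline."""
    return np.concatenate([modalities[k] for k in sorted(modalities)],
                          axis=1)

def drop_modality(modalities, name):
    """Toy version of a modality-robustness probe: zero out one source
    and re-run the same pipeline unchanged."""
    out = dict(modalities)
    out[name] = np.zeros_like(out[name])
    return out

feats = concat_fusion(load_modalities())                          # (8, 7)
robust = concat_fusion(drop_modality(load_modalities(), "text"))  # (8, 7)
```

The point of the standardization the abstract describes is that probes like `drop_modality` can be applied uniformly across algorithms and datasets without rewriting the pipeline.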